An educational paradigm for teaching computer forensics
Teaching Computer Forensics to students at postgraduate and undergraduate levels is a challenge. Creating an assignment that is both realistic and helpful to students pursuing careers in this competitive area is also a demanding task for the lecturer. A problem-based learning (PBL) strategy has been used to increase the employability of the students by designing a real-world problem for them to solve, and this can be shown to enhance the skills students need when seeking jobs. The coursework is based around a case study. To add an extra dimension to the assessment, we involved final year Law students from the Law Department in the School of Humanities to act as jury members and to help cross-examine the postgraduate students as they presented their findings in the role of an Expert Witness. At the same time, this created a valuable exercise for the Law students, since evidence presented in court is increasingly computer-based. This paper discusses the preparation of the evidence files, how employability is enhanced by a PBL approach to teaching, and the process of evaluating the students' work, and concludes with an overview of the student experience for all those involved.
Performance analysis of an ATM network with multimedia traffic: a simulation study
Traffic and congestion control are important in enabling ATM networks to maintain the Quality of Service (QoS) required by end users. A Call Admission Control (CAC) strategy ensures that the network has sufficient resources available at the start of each call, but this does not prevent a traffic source from violating the negotiated contract. A policing strategy (User Parameter Control (UPC)) is also required to enforce the negotiated rates for a particular connection and to protect conforming users from network overload.
The aim of this work is to investigate traffic policing and bandwidth management at the User to Network Interface (UNI). A policing function based on the leaky bucket (LB) is proposed which offers improved performance for both real-time (RT) traffic, such as speech and video, and non-real-time (non-RT) traffic, mainly data, by taking their QoS requirements into account. A video cell in violation of the negotiated bit rate causes the remainder of the slice to be discarded; this 'tail clipping' protects the decoder from damaged video slices. Speech cells are coded using a frequency-domain coder, which places the most significant bits of a double speech sample into a high priority cell and the least significant bits into a low priority cell. In the case of congestion, the low priority cell can be discarded with little impact on the intelligibility of the received speech. Data cells, however, require loss-free delivery and are buffered rather than being discarded or tagged for subsequent deletion. This triple strategy is termed the super leaky bucket (SLB).
Separate queues for RT and non-RT traffic are also proposed at the multiplexer, with non-pre-emptive priority service for RT traffic if its queue exceeds a predetermined threshold. If the RT queue continues to grow beyond a second threshold, then all low priority cells (mainly speech) are discarded. This scheme protects non-RT traffic from being tagged and subsequently discarded, by queueing the cells and by throttling back non-RT sources during periods of congestion. It also prevents RT cells from being delayed excessively in the multiplexer queue.
A simulation model has been designed and implemented to test the proposal. Realistic sources have been incorporated into the model to simulate the types of traffic which could be expected on an ATM network.
The results show that the SLB outperforms the standard LB for video cells: the number of cells discarded and the resulting number of damaged video slices are significantly reduced. Dual queues with cyclic service at the multiplexer also reduce the delays experienced by RT cells. The QoS for all categories of traffic is preserved.
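The triple policing strategy described above can be sketched as a leaky-bucket conformance test combined with per-traffic-class actions for violating cells. This is an illustrative reconstruction, not the paper's implementation; the class and parameter names are hypothetical:

```python
from collections import deque

class SuperLeakyBucket:
    """Sketch of the super leaky bucket (SLB) policing idea: a leaky-bucket
    conformance test plus per-class actions (hypothetical names/parameters).
    Video violators trigger tail clipping, speech violators are sent at low
    priority, and data cells are buffered for loss-free delivery."""

    def __init__(self, depth, leak_rate):
        self.depth = depth          # bucket capacity, in cells
        self.leak_rate = leak_rate  # cells drained per unit time
        self.level = 0.0
        self.last_time = 0.0
        self.data_buffer = deque()  # data cells are buffered, never dropped
        self.clipping = False       # True while the rest of a video slice is clipped

    def _conforms(self, now):
        # Drain the bucket for the elapsed time, then test whether one more cell fits.
        self.level = max(0.0, self.level - (now - self.last_time) * self.leak_rate)
        self.last_time = now
        if self.level + 1 <= self.depth:
            self.level += 1
            return True
        return False

    def police(self, cell, now):
        """cell: dict with 'kind' in {'video', 'speech', 'data'} and, for video,
        an 'end_of_slice' flag. Returns the action taken for this cell."""
        if cell["kind"] == "video" and self.clipping:
            # Tail clipping: discard the remainder of a damaged slice.
            if cell.get("end_of_slice"):
                self.clipping = False
            return "discard"
        if self._conforms(now):
            return "send"
        if cell["kind"] == "video":
            # Slice is damaged; clip its tail to protect the decoder.
            self.clipping = not cell.get("end_of_slice")
            return "discard"
        if cell["kind"] == "speech":
            # May be dropped later in the multiplexer under congestion.
            return "send_low_priority"
        self.data_buffer.append(cell)  # data requires loss-free delivery
        return "buffer"
```

The per-class branching is the essence of the "triple strategy": one conformance test, three different consequences for non-conforming cells.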
Forensic analysis of digital attack tool artifacts
The aim of this work was to investigate the forensic artifacts left by network attack tools within Linux and UNIX operating systems and to develop an application called HexaFind. The application enables a forensic investigator to collect the digital evidence left behind by the usage, installation or removal of specific attack tools. The main objective was to decrease the complexity of forensic investigations within these operating systems and to increase the detection rate of forensic artifacts relating to criminal or civil evidence of malicious conduct.
Forensic investigation into Mac OS X volatile memory
An important area for forensic investigations is the analysis of live memory captured from a running machine. The RAM may provide an in-depth picture of the system at the moment it was seized, which could reveal many vital pieces of evidence otherwise unobtainable from the computer's hard disk. Research in this area is relatively sparse on all platforms, but especially for Mac OS X. The aim of this work was to investigate volatile memory analysis for a Mac and to develop a tool, called VolaGUI, to assist forensic examiners when analyzing volatile memory.
Social engineering in the internet of everything
The Internet of Everything is becoming a reality, with fridges, smart TVs, cars, medical monitoring equipment and industrial control systems all communicating via the Internet. There have already been cases where smart toys have been exploited by hackers to steal personal information. Social engineering attacks on these devices, which have little or no security, are already a reality, with hackers quick to exploit any weakness in cyberspace. This article reviews past cases, gives three scenarios which could lead to the owner of an IoT device giving private information to hackers, and then proposes defence recommendations.
Social networking privacy — Who's stalking you?
This research investigates the privacy issues that exist on social networking sites. It is reasonable to assume that many Twitter users are unaware of the dangers of uploading a tweet to their timeline that can be seen by anyone. Enabling geo-location tagging on tweets can result in the leakage of personal information which the user did not intend to be public and which can seriously affect that user's privacy and anonymity online. This research demonstrates that key information can easily be retrieved using the starting point of a single tweet with geo-location turned on. A series of experiments was undertaken to determine how much information can be obtained about a particular individual using only social networking sites and freely available mining tools. The information gathered enabled the target subjects to be identified on other social networking sites such as Foursquare, Instagram, LinkedIn, Facebook and Google+, where more personal information was leaked. The tools used are discussed, the results of the experiments are presented and the privacy implications are examined.
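The starting point of such an experiment is simply reading the location out of a geo-tagged tweet. A minimal sketch, assuming the Twitter REST API v1.1 tweet layout in which `coordinates` is a GeoJSON Point stored as [longitude, latitude]:

```python
import json

def tweet_location(tweet_json):
    """Return (latitude, longitude) for a geo-tagged tweet, or None when
    geo-location is turned off. Assumes the Twitter API v1.1 JSON layout,
    where 'coordinates' is a GeoJSON Point as [longitude, latitude]."""
    tweet = json.loads(tweet_json)
    point = tweet.get("coordinates")
    if not point:
        return None  # user did not enable geo-tagging for this tweet
    lon, lat = point["coordinates"]  # GeoJSON order is longitude first
    return lat, lon
```

A single coordinate pair recovered this way is what seeds the cross-site identification described above.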
The V-Network testbed for malware analysis
This paper presents a virtualised network environment that serves as a stable and re-usable platform for the analysis of malware propagation. The platform, which has been developed using VMware virtualisation technology, enables the use of either a graphical user interface or scripts to create virtual networks, clone, restart and take snapshots of virtual machines, reset experiments, clean virtual machines and manage the entire infrastructure remotely. The virtualised environment uses open source routing software to support the deployment of intrusion detection systems and other malware attack sensors, and is therefore suitable for evaluating countermeasure systems before deployment on live networks. An empirical analysis of network worm propagation has been conducted using worm outbreak experiments on Class A size networks to demonstrate the capability of the developed platform.
An eye for deception: A case study in utilizing the human-as-a-security-sensor paradigm to detect zero-day semantic social engineering attacks
In a number of information security scenarios, human beings can be better than technical security measures at detecting threats. This is particularly the case when a threat is based on deception of the user rather than exploitation of a specific technical flaw, as with spear-phishing, application spoofing, multimedia masquerading and other semantic social engineering attacks. Here, we put the concept of the human-as-a-security-sensor to the test with a first case study, in which a small number of participants were subjected to different attacks in a controlled laboratory environment and provided with a mechanism to report these attacks if they spotted them. A key challenge is to estimate the reliability of each report, which we address with a machine learning approach. For comparison, we evaluate the ability of known technical security countermeasures to detect the same threats. This initial proof-of-concept study shows that the concept is viable.
You are probably not the weakest link: Towards practical prediction of susceptibility to semantic social engineering attacks
Semantic social engineering attacks are a pervasive threat to computer and communication systems. By employing deception rather than exploiting technical vulnerabilities, spear-phishing, obfuscated URLs, drive-by downloads, spoofed websites, scareware, and other attacks are able to circumvent traditional technical security controls and target the user directly. Our aim is to explore the feasibility of predicting user susceptibility to deception-based attacks through attributes that can be measured, preferably in real time and in an automated manner. Toward this goal, we have conducted two experiments: the first on 4333 users recruited on the Internet, allowing us to identify useful high-level features through association rule mining, and the second on a smaller group of 315 users, allowing us to study these features in more detail. In both experiments, participants were presented with attack and non-attack exhibits and were tested on their ability to distinguish between the two. Using the data collected, we have determined practical predictors of users' susceptibility to semantic attacks and used them to produce and evaluate a logistic regression and a random forest prediction model, with accuracy rates of 0.68 and 0.71, respectively. We have observed that security training makes a noticeable difference in a user's ability to detect deception attempts, with one of the most important features being the time since last self-study, while formal security education through lectures appears to be much less useful as a predictor. Other important features were computer literacy, familiarity with, and frequency of access to, a specific platform. Depending on an organisation's preferences, the models learned can be configured to minimise false positives or false negatives or to maximise accuracy, based on a probability threshold. For both models, a threshold choice of 0.55 would keep both false positives and false negatives below 0.2.
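The threshold trade-off at the end of the abstract can be made concrete: given each user's predicted susceptibility probability, a single threshold splits them into predicted-susceptible and predicted-resistant, and moving it trades false positives against false negatives. A minimal sketch (toy data, not the paper's models or dataset):

```python
def confusion_rates(probs, labels, threshold):
    """Classify each user as susceptible when their predicted probability
    meets `threshold`, and return (false_positive_rate, false_negative_rate)
    against the true labels (1 = fell for the attack, 0 = did not).
    Illustrative only; the paper's classifiers were a logistic regression
    and a random forest."""
    fp = sum(1 for p, y in zip(probs, labels) if p >= threshold and y == 0)
    fn = sum(1 for p, y in zip(probs, labels) if p < threshold and y == 1)
    negatives = sum(1 for y in labels if y == 0)
    positives = sum(1 for y in labels if y == 1)
    # Guard against empty classes in toy inputs.
    return fp / max(negatives, 1), fn / max(positives, 1)
```

Sweeping `threshold` over a validation set and picking the value that keeps both rates acceptable is exactly the kind of configuration step the abstract describes for the 0.55 choice.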
Performance evaluation of cyber-physical intrusion detection on a robotic vehicle
Intrusion detection systems designed for conventional computer systems and networks are not necessarily suitable for mobile cyber-physical systems, such as robots, drones and automobiles. They tend to be geared towards attacks of a different nature and do not take into account mobility, energy consumption and other physical aspects that are vital to a mobile cyber-physical system. We have developed a decision-tree-based method for detecting cyber attacks on a small-scale robotic vehicle, using both cyber and physical features that can be measured by its on-board systems and processes. We evaluate it experimentally against a variety of scenarios involving denial of service, command injection and two types of malware attacks. We observe that the addition of physical features noticeably improves the detection accuracy for two of the four attack types and reduces the detection latency for all four.
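The core idea of mixing cyber and physical features in one decision tree can be sketched as a hand-written tree of threshold tests. The feature names, thresholds, and tree shape below are purely illustrative, not the tree learned in the paper:

```python
def classify(sample):
    """Toy decision tree over mixed features of a robotic vehicle:
    cyber (packet rate, CPU load) and physical (power draw, wheel speed).
    All names and thresholds are hypothetical examples of the approach,
    not the paper's learned model."""
    if sample["net_packets_per_s"] > 500:       # cyber: flood of traffic
        return "denial_of_service"
    if sample["cpu_load"] > 0.8:                # cyber: unexpected compute
        return "malware"
    if sample["power_draw_w"] > 12.0:           # physical: excess energy use
        # Physical context disambiguates: high power while moving fast
        # suggests injected drive commands rather than background malware.
        if sample["wheel_speed_rps"] > 3.0:
            return "command_injection"
        return "malware"
    return "normal"
```

A learned tree would pick these splits from labelled attack traces, but the structure shows why physical features help: two attacks with similar cyber footprints can differ in their physical side effects.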